
Install NGINX Ingress Controller with Cilium

#nginx #ingress #cilium

This is a follow-up blog post to installing Cilium; see Kubernetes Cluster on Raspberry Pi using Ubuntu 22.04 LTS, K3s, and Cilium!

Ingress services are key in enabling functionalities such as path-based routing, TLS termination, and consolidating multiple services under a single load-balancer IP. Having spent considerable time at NGINX and F5, my allegiance leans towards NGINX, which I regard as the premier Ingress Controller for production-grade workloads.

Lately, my explorations have led me to delve deeper into Cilium. Its L2 Announcements and LoadBalancer IP Address Management (LB IPAM) features have been game-changers, effectively replacing my use of MetalLB. Furthermore, Cilium now offers a fully compliant Kubernetes Ingress implementation right out of the box, which is more than capable of handling straightforward use cases.

However, I am not prepared to abandon NGINX within the Kubernetes ecosystem. Hence, this post serves as a personal quick-start guide for anyone interested in installing the NGINX Ingress Controller in a Kubernetes cluster running Cilium networking.

NGINX Ingress Controller

This is not a complete guide or tutorial on how to operate it; for that, check out the project documentation and resources referenced below.

NGINX Ingress Controller is a production‑grade Ingress controller (daemon) that runs alongside NGINX in a Kubernetes environment. The daemon monitors NGINX Ingress and Kubernetes Ingress resources to discover service requests that require ingress load balancing.

NGINX vs. Kubernetes Community Ingress Controller

It’s important to clarify that there are two versions of the NGINX Ingress Controller:

  1. Community version: kubernetes/ingress-nginx is an Ingress controller for Kubernetes that uses NGINX as a reverse proxy and load balancer. This is the open-source project maintained by the Kubernetes community, with support from F5 NGINX to help manage the project.
  2. NGINX version: This version lives in the nginxinc/kubernetes-ingress repository. It is developed and maintained by F5 NGINX and documented on docs.nginx.com. For more information, check out NGINX vs. Kubernetes Community Ingress Controller. It is offered in two editions:
    1. NGINX Open Source-based: This edition is free and open source.
    2. NGINX Plus-based: This edition is a commercial option.
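
This post uses the community chart. If you ever need to confirm which of the two controllers a cluster is running, checking the controller image is usually enough; the namespace below matches the one used later in this post, so adjust it for other clusters:
# Community builds use registry.k8s.io/ingress-nginx/controller images,
# while the F5 NGINX controller ships as nginx/nginx-ingress (or an NGINX Plus variant)
kubectl get pods -n nginx-ingress \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'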

NGINX Quick Install using the Helm Chart

  1. Add the required Helm repo:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
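
Optionally, refresh the local chart index and list the available chart versions before choosing one:
helm repo update
helm search repo ingress-nginx/ingress-nginx --versions | head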
  2. Since I am deploying the ingress on my k3s cluster with Cilium L2 Announcements and LB IPAM, I will use this values file, also shown below:
controller:
  service:
    externalTrafficPolicy: Local
    type: LoadBalancer
    annotations: 
      io.cilium/lb-ipam-ips: "192.168.111.200" # Static IP Assignment
  metrics:
    enabled: true
    #serviceMonitor: # Uncomment only if the Prometheus Operator is installed
      #enabled: true # Uncomment only if the Prometheus Operator is installed

Explanation: externalTrafficPolicy: Local preserves the client source IP and only routes traffic to nodes running a controller pod. type: LoadBalancer lets Cilium LB IPAM assign an external IP to the controller Service, which Cilium then announces over L2, and the io.cilium/lb-ipam-ips annotation pins that assignment to a specific, static IP from the pool. Finally, metrics.enabled exposes a separate metrics Service; leave the serviceMonitor settings commented out unless the Prometheus Operator is installed.
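
For completeness, the static IP requested above has to belong to an LB IPAM pool that Cilium announces on the local network. A minimal sketch of the two resources involved is shown below, assuming the cilium.io/v2alpha1 API (roughly Cilium 1.14 and later); the pool and policy names, the CIDR, and the interface regex are lab-specific placeholders, not values taken from this cluster.
cat <<'EOF' | kubectl apply -f -
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lab-pool                   # placeholder name
spec:
  blocks:                          # older Cilium releases call this field "cidrs"
    - cidr: "192.168.111.200/29"   # must contain the IP set in io.cilium/lb-ipam-ips
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: lab-l2-policy              # placeholder name
spec:
  loadBalancerIPs: true            # ARP-announce LoadBalancer IPs
  interfaces:
    - ^eth[0-9]+                   # adjust to match your nodes' NICs
EOF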

  3. We can install a specific version into a specific namespace, e.g., version 4.5.2 of the Helm chart into the namespace nginx-ingress:
export INGRESS_NGINX_VERSION=4.5.2
export INGRESS_NGINX_NAMESPACE=nginx-ingress
export INGRESS_NGINX_RELEASE_NAME=nginx-ingress

helm upgrade --install $INGRESS_NGINX_RELEASE_NAME \
    ingress-nginx/ingress-nginx \
    -n $INGRESS_NGINX_NAMESPACE \
    --version $INGRESS_NGINX_VERSION \
    --create-namespace \
    --set rbac.create=true  \
    --values deployments/ingress-nginx/values.yaml
  4. Confirm that the Helm chart has been deployed:
helm list -A | grep nginx-ingress

# Example output
nginx-ingress   nginx-ingress   1               2024-01-11 13:17:08.895650832 -0600 CST deployed        ingress-nginx-4.5.2   1.6.4

  5. Watch the pods and services deployed in the namespace:
export INGRESS_NGINX_NAMESPACE=nginx-ingress
kubectl get all -n $INGRESS_NGINX_NAMESPACE

NAME                                                          READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-ingress-nginx-controller-8584b9bc56-jhkp8   1/1     Running   0          173m

NAME                                                       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
service/nginx-ingress-ingress-nginx-controller-metrics     ClusterIP      10.43.142.192   <none>            10254/TCP                    173m
service/nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.43.255.216   <none>            443/TCP                      173m
service/nginx-ingress-ingress-nginx-controller             LoadBalancer   10.43.103.51    192.168.111.200   80:31441/TCP,443:31994/TCP   173m

NAME                                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-ingress-nginx-controller   1/1     1            1           173m

NAME                                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-ingress-ingress-nginx-controller-8584b9bc56   1         1         1       173m
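
The sample Ingress later in this post uses ingressClassName: nginx, so it is worth confirming that the chart registered that IngressClass and that the controller Service received the IP requested in the values file:
kubectl get ingressclass

# Should print the IP set via the io.cilium/lb-ipam-ips annotation
kubectl get svc nginx-ingress-ingress-nginx-controller -n $INGRESS_NGINX_NAMESPACE \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'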

Deploy a sample application and see it in action

We can deploy a couple of sample applications to see NGINX Ingress routing and load balancing in action.

See this all-in-one manifest that deploys the moon and sun web applications; both are simply NGINX web servers presenting a web page. There is a Service for each application; a Service logically groups a set of Pods and provides network connectivity to them, keeping track of the Pod Endpoints (IP addresses and ports) we can connect to.

The manifest looks like this:

apiVersion: v1
kind: Namespace
metadata:
  name: solar-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: moon
  namespace: solar-system
spec:
  replicas: 4
  selector:
    matchLabels:
      app: moon
  template:
    metadata:
      labels:
        app: moon
    spec:
      containers:
        - name: moon
          image: armsultan/solar-system:moon-nonroot
          imagePullPolicy: Always
          # resources:
          #   limits:
          #     cpu: "1"
          #     memory: "200Mi"
          #   requests:
          #     cpu: "0.5"
          #     memory: "100Mi"
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: moon-svc
  namespace: solar-system
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: moon
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sun
  namespace: solar-system
spec:
  replicas: 4
  selector:
    matchLabels:
      app: sun
  template:
    metadata:
      labels:
        app: sun
    spec:
      containers:
        - name: sun
          image: armsultan/solar-system:sun-nonroot
          imagePullPolicy: Always
          # resources:
          #   limits:
          #     cpu: "1"
          #     memory: "200Mi"
          #   requests:
          #     cpu: "0.5"
          #     memory: "100Mi"
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: sun-svc
  namespace: solar-system
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: sun
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: solarsystem-ingress
  namespace: solar-system
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx # use only with k8s version >= 1.18.0
  rules:
    - host: sun.lab.armand.nz
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sun-svc
                port:
                  number: 80
    - host: moon.lab.armand.nz
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: moon-svc
                port:
                  number: 80
  1. Download that manifest file and apply it to the cluster using kubectl
curl https://gist.githubusercontent.com/armsultan/3b88590c03a25decad8443f30525dc5a/raw/b34d428677c80e83914043b7198cff916f65d43e/moon-sun-all-in-one-ingress.yaml -o moon-sun-all-in-one-ingress.yaml

kubectl apply -f moon-sun-all-in-one-ingress.yaml
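
Optionally, confirm that each Service has picked up its Pod Endpoints, as described earlier:
kubectl get pods,svc,endpoints -n solar-system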
  2. Check the ingress's external IP address on the load balancer:
kubectl get ingress -n solar-system
 
NAME                  CLASS   HOSTS                                  ADDRESS           PORTS   AGE
solarsystem-ingress   nginx   sun.lab.armand.nz,moon.lab.armand.nz   192.168.111.200   80      171m
  3. Test external access using a web browser or with curl in your terminal:
curl  http://192.168.111.200 -s   -H "Host: moon.lab.armand.nz" | grep \<title
<title>The Moon</title>

curl  http://192.168.111.200 -s   -H "Host: sun.lab.armand.nz" | grep \<title
<title>The Sun</title>
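
Requests that do not match either hostname fall through to the controller's default backend, which should answer with an HTTP 404; this is a quick way to confirm the routing really is host-based:
# No matching Host header, so ingress-nginx serves its default backend
curl http://192.168.111.200 -s -o /dev/null -w "%{http_code}\n"
# Expected output: 404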
  4. To test with a web browser, you may need to add entries to your hosts file first:
echo "192.168.111.200 moon.lab.armand.nz sun.lab.armand.nz" | sudo tee -a /etc/hosts

[Screenshots: the moon test page and the sun test page]

Uninstall

  1. To remove the sun and moon apps in the solar-system namespace, we can use kubectl:
kubectl delete -f moon-sun-all-in-one-ingress.yaml

# OR, just delete the solar-system namespace
kubectl delete namespace solar-system
  2. To uninstall NGINX using Helm, run the helm delete command:

export INGRESS_NGINX_NAMESPACE=nginx-ingress
export INGRESS_NGINX_RELEASE_NAME=nginx-ingress

helm delete $INGRESS_NGINX_RELEASE_NAME -n $INGRESS_NGINX_NAMESPACE
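
Note that helm delete removes the release but not the namespace that --create-namespace created, so delete it separately for a full cleanup:
kubectl delete namespace $INGRESS_NGINX_NAMESPACE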